DESCRIPTION
The value of error guessing (EG) lies in the unexpected: tests devised by guessing would otherwise not be considered. Based on their experience, testers go in search of defect-sensitive spots in the system and devise suitable test
cases for these.
Experience here is a broad concept: it could be the professional tester who ‘smells’ the defects in certain complex
screen processes, but it could also be the user or administrator who knows the exceptional situations from practice and
wishes to test whether the new or amended system deals with them adequately.
Together with exploratory testing (section Exploratory Testing (ET)), error guessing is a rather strange technique among the test
design techniques. Neither technique is based on any of the described basic techniques, and therefore neither provides
any specifiable coverage.
This very informal technique leaves the tester free to design the test cases in advance or to create them on the spot
during test execution. Documenting the test cases is optional. A point of focus when they are not documented is the
reproducibility of the test: the tester often cannot quite remember under which circumstances a defect occurred. A
possible measure is to take notes (a ‘test log’) during the test. Obviously, defects found with the test
are documented. In that case, great attention should be paid to the circumstances that led to the defect, so
that it is reproducible.
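The test log mentioned above can be kept very simply. A minimal sketch (the class and the sample actions are illustrative, not part of any specific tool): each tester action is recorded with a timestamp and an observation, so that the circumstances of a defect can be reconstructed afterwards.

```python
from datetime import datetime, timezone

class TestLog:
    """Minimal test log: records each tester action with a timestamp,
    so the circumstances of a defect can be reconstructed later."""

    def __init__(self):
        self.entries = []

    def note(self, action, observation=""):
        # Store a timestamped record of what was done and what was seen.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "observation": observation,
        })

    def report(self):
        # Render the log as plain text, e.g. to attach to a defect report.
        return "\n".join(
            f'{e["time"]}  {e["action"]}  {e["observation"]}'.rstrip()
            for e in self.entries
        )

log = TestLog()
log.note("entered amount -100 on payment screen", "field accepted the value")
log.note("pressed Submit twice in quick succession", "duplicate booking created")
print(log.report())
```

The report can be attached to the defect as-is; the timestamps make it possible to correlate the tester's actions with system logging.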
Tips:
- An aid for reproducing a defect is activating logging during the test, documenting the actions of the tester. A tool for automated test execution can be used for this.
- The considerable freedom of the technique makes the area of application very broad. Error guessing can be applied to the testing of every quality characteristic and to every form of test basis.
- Error guessing can also be used, for example, to instil (or restore) users’ or administrators’ confidence in a system, in the spirit of ‘If these situations are processed satisfactorily, then the rest will probably also be all right’.
Error guessing is sometimes confused with exploratory testing (section Exploratory Testing (ET)). Table 1 sums up the differences:

| Error guessing | Exploratory testing |
| Does not employ the basic techniques | Employs the most suitable basic technique, depending on the situation |
| Suitable for testers, users, administrators, etc. | Suitable for experienced testers with knowledge of the basic techniques |
| The test cases are designed in the Specification phase or during test execution | The test cases are designed during test execution |
| Focuses on the exceptions and difficult situations | Focuses on the aspect to be tested in total (screen, function) |
| Not systematic, no certainty at all concerning coverage | Somewhat systematic |

Table 1: Differences between Error Guessing and Exploratory Testing.
In practice, error guessing is often cited as the applied test technique in the absence of a better name: ‘It’s not a
common test design technique, therefore it is error guessing’. In particular, the testing of business processes by
users, or the testing of requirements, is often referred to as error guessing. However, the basic technique
‘checklist’ is used there, whereas with error guessing no specific basic technique is used.
The fact that tests are executed that otherwise would not be considered makes error guessing a valuable addition to the
other test design techniques. However, since error guessing guarantees no coverage whatsoever, it is not a replacement for them.
Preferably, error guessing takes place later in the total test process, when most normal and simple defects have
already been removed with the regular techniques. Error guessing can then focus on testing the real exceptions
and difficult situations. From within the test strategy, the test manager normally gives some direction to the aim
of error guessing, so that duplication of other tests is avoided. The test manager also makes a certain amount of time
(a time box) and resources available.
POINTS OF FOCUS IN THE STEPS
The steps can be performed both during the Specification phase and during test execution. The tester usually does not
document the results of the steps, but if great value is attached to evidence, or to the transferability and reusability of
the test, then this should be done.
1 - Identifying test situations
Prior to test execution, the tester identifies the weak points on which the test should focus. These are often mistakes
in the thought processes of others and things that have been forgotten. These aspects form the basis of the test cases
to be executed. Examples are:
- Exceptional situations - rare situations in the system operation, screen processing or business and other processes
- Fault handling - forcing a fault situation during the handling of another fault situation, interrupting a process unexpectedly, etc.
- Non-permitted input - negative amounts, zeros, excessive values, too-long names, empty (mandatory) fields, etc. (only useful if no syntactic test is carried out on this part)
- Specific combinations, for example in the area of:
  - Data: an as-yet untried combination of input values
  - Sequence of transactions: e.g. “change – cancel change – change again – cancel – etc.” a number of times in succession
- Claiming too much of the system resources (memory, disk space, network)
- Complex parts of the system
- Often-changed parts of the system
- Parts (processes/functions) of the system that often contained defects in the past.
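The ‘non-permitted input’ situations above can be probed very directly. A minimal sketch, in which the system under test is a hypothetical payment-entry validator (the function, its field names and its limits are invented for illustration): the tester guesses inputs that the designers may have forgotten and checks that each one is rejected.

```python
# Hypothetical system under test: a tiny payment-entry validator.
def validate_payment(amount, payee):
    if not payee:
        return "error: payee is mandatory"
    if len(payee) > 35:
        return "error: payee too long"
    if amount <= 0:
        return "error: amount must be positive"
    if amount > 1_000_000:
        return "error: amount exceeds limit"
    return "ok"

# Error-guessing probes: non-permitted input the tester suspects
# was forgotten in the design.
probes = [
    (-100, "J. Smith"),   # negative amount
    (0, "J. Smith"),      # zero
    (10**9, "J. Smith"),  # excessive value
    (50, "X" * 100),      # too-long name
    (50, ""),             # empty mandatory field
]
for amount, payee in probes:
    result = validate_payment(amount, payee)
    assert result != "ok", f"defect: accepted {(amount, payee)!r}"
print("all non-permitted inputs were rejected")
```

Note that, as the list above says, such probes only add value if no syntactic test already covers these fields.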
2 - Creating logical test cases
This step normally takes place only with more complex test cases. The tester may consider creating a logical test case
that will cover the situation to be tested.
3 - Creating physical test cases
This step normally only takes place with more complex test cases. The tester may consider creating a physical test case
for the logical test case.
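To make the distinction between steps 2 and 3 concrete: a logical test case describes the situation to be covered in abstract terms, while a physical test case fills in concrete values and a concrete starting point so that it can actually be executed. A minimal sketch (all identifiers and field values are illustrative):

```python
# Step 2 - logical test case: the situation to cover, in abstract terms.
logical = {
    "id": "EG-01",
    "situation": "cancel a change while a second change is already pending",
    "expected": "system rejects the cancel and keeps the data consistent",
}

# Step 3 - physical test case: the same situation with concrete values
# and a concrete starting point, so it can actually be executed.
physical = {
    "logical_id": logical["id"],
    "starting_point": "customer 4711 exists with address 'Main St 1'",
    "steps": [
        "change address to 'Oak Ave 2'",
        "change address to 'Pine Rd 3' before saving",
        "cancel the first change",
    ],
    "expected": logical["expected"],
}
print(physical["logical_id"])
```

The `starting_point` field anticipates step 4: it records what must exist in the test environment before the test case can run.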
4 - Establishing the starting point
During this activity, it may emerge that it is necessary to build a particular starting point for purposes of the test.